-
This tutorial provides practical training in designing and conducting online user experiments with recommender systems, and in statistically analyzing the results of such experiments. It covers the development of a research question and hypotheses, the selection of study participants, the manipulation of system aspects and the measurement of behaviors, perceptions, and user experiences, and the evaluation of subjective measurement scales and study hypotheses. Interested parties can find the slides, example dataset, and other resources at https://www.usabart.nl/QRMS/.
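As an illustration of the last step, the sketch below evaluates a subjective measurement scale with Cronbach's alpha and runs a between-subjects hypothesis test. This is a minimal sketch, not taken from the tutorial materials; the response data and the "perceived quality" scale are invented for the example.

import numpy as np
from scipy import stats

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_participants, n_items) rating matrix:
    # (k / (k - 1)) * (1 - sum of item variances / variance of scale totals)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings on a 3-item "perceived quality" scale,
# collected under a control and a manipulated system condition
rng = np.random.default_rng(0)
control = rng.integers(2, 6, size=(40, 3))
treatment = rng.integers(3, 6, size=(40, 3))

print(f"alpha (control):   {cronbach_alpha(control):.2f}")
print(f"alpha (treatment): {cronbach_alpha(treatment):.2f}")

# Test the hypothesis that the manipulation raises perceived quality,
# comparing per-participant scale means across the two conditions
t, p = stats.ttest_ind(treatment.mean(axis=1), control.mean(axis=1))
print(f"t = {t:.2f}, p = {p:.4f}")

In practice one would check alpha (conventionally above 0.7) before averaging the items into a single scale score and testing hypotheses on it.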
-
While substantial advances have been made in recommender systems -- both in general and for news -- using datasets, offline analyses, and one-shot experiments, longitudinal studies of real users remain the gold standard, and the only way to effectively measure the impact of recommender system designs (algorithmic and otherwise) on long-term user experience and behavior. While such infrastructure exists for studies within some individual organizations, the extensive cost and effort to build the systems, content streams, and user base make it prohibitive for most researchers to conduct such studies. We propose to develop shared research infrastructure for the research community, and have received funding to gather community input on requirements, resources, and research goals for such an infrastructure. If the full infrastructure proposal is funded, it would result in recruiting a community of thousands of users who agree to use a news delivery application within which various researchers would install and conduct experiments. In this short paper we outline what we have heard and learned so far and present a set of questions directed to INRA attendees to gather their feedback at the workshop.
-
Recommendation and ranking systems are known to suffer from popularity bias: the tendency of the algorithm to favor a few popular items while under-representing the majority of other items. Prior research has examined various approaches for mitigating popularity bias and enhancing the recommendation of long-tail, less popular items. The effectiveness of these approaches is often assessed using different metrics to evaluate the extent to which over-concentration on popular items is reduced. However, not much attention has been given to the user-centered evaluation of this bias: how users with different levels of interest in popular items are affected by such algorithms. In this paper, we show the limitations of the existing metrics for evaluating popularity bias mitigation from the users' perspective, and we propose a new metric that addresses these limitations. In addition, we present an effective approach that mitigates popularity bias from a user-centered point of view. Finally, we investigate several state-of-the-art approaches proposed in recent years to mitigate popularity bias and evaluate their performance using both the existing metrics and the users' perspective. Our experimental results on two publicly available datasets show that existing popularity bias mitigation techniques ignore users' tolerance for popular items. Our proposed user-centered method tackles popularity bias effectively for different users while also improving on the existing metrics.
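To make the user-centered view concrete, the sketch below computes a simple per-user popularity deviation: the mean popularity of a user's recommendations minus the mean popularity of the user's own profile. This is an illustrative stand-in along the lines the abstract describes, not the paper's proposed metric; the interaction matrix and recommendation list are invented.

import numpy as np

# Toy interaction matrix: rows are users, columns are items (1 = consumed)
interactions = np.array([
    [1, 1, 1, 0, 0, 0],   # user 0: mostly popular (head) items
    [1, 1, 0, 1, 0, 0],   # user 1: mixed tastes
    [1, 1, 0, 0, 1, 0],   # user 2: mixed tastes
    [0, 0, 1, 0, 1, 1],   # user 3: leans toward the long tail
])

# Item popularity = fraction of users who consumed each item
popularity = interactions.mean(axis=0)

def popularity_deviation(user, recs):
    # Mean popularity of the recommended items minus the mean popularity
    # of the user's own profile; values near 0 mean the list matches the
    # user's tolerance for popular items, while large positive values
    # mean the user is being pushed toward the head of the distribution.
    profile = np.flatnonzero(interactions[user])
    return popularity[recs].mean() - popularity[profile].mean()

# The same head-heavy list (the two most popular items) deviates far
# more from the long-tail user's profile than from the head user's
recs = np.array([0, 1])
for u in range(len(interactions)):
    print(f"user {u}: deviation = {popularity_deviation(u, recs):+.2f}")

Averaging an item-exposure metric over the whole catalog would hide exactly this per-user difference, which is the gap the abstract's user-centered metric is meant to close.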